Security problems of systems of extremely weak devices
In this paper we discuss fundamental security issues of distributed systems of weak devices.
We briefly describe two extreme kinds of such systems, the sensor network and the Radio
Frequency IDentification (RFID) system, from the point of view of a security mechanism
designer. We describe the most important peculiarities and issues (including unsolved
problems) that have to be taken into account in security design and analysis. Finally, we
present some fundamental concepts and paradigms of research on the security of weak devices.
In the paper we also give a brief survey of the ultra-light HB/HB+ family of authentication
protocols and so-called key predistribution protocols.
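For background, a single round of the basic HB protocol can be sketched as follows. The tag proves knowledge of a shared secret bit-vector x by answering a random challenge a with the noisy inner product z = ⟨a, x⟩ ⊕ ν, where the noise bit ν is 1 with probability η. All names, parameter values, and the acceptance threshold below are illustrative assumptions, not taken from the paper:

```python
import random

# Illustrative sketch of the basic HB (Hopper-Blum) authentication
# protocol; N, ETA, and the acceptance threshold are assumptions for
# this sketch, not values from the paper.

N = 128      # secret key length (illustrative)
ETA = 0.125  # tag's noise rate (illustrative)

def inner_product_mod2(a, x):
    """Binary inner product <a, x> mod 2."""
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def tag_response(a, x, rng, eta=ETA):
    """Tag answers the challenge a with a noisy inner product."""
    nu = 1 if rng.random() < eta else 0
    return inner_product_mod2(a, x) ^ nu

def authenticate(x, rounds=256, rng=random):
    """Reader accepts if the tag's answers are wrong at roughly rate eta."""
    errors = 0
    for _ in range(rounds):
        a = [rng.randrange(2) for _ in range(N)]
        if tag_response(a, x, rng) != inner_product_mod2(a, x):
            errors += 1
    return errors <= rounds * (ETA + 0.1)  # illustrative threshold
```

The noise is what makes the scheme "ultra-light" yet hard to break: recovering x from noisy parities is the Learning Parity with Noise problem.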
What Do Our Choices Say About Our Preferences?
Making decisions online is part of everyday life. Think of buying a house,
parking a car, or taking part in an auction. We often make those decisions
publicly, which may breach our privacy: a party observing our choices may
learn a lot about our preferences. In this paper we investigate online
stopping algorithms from a privacy-preserving perspective, using the
mathematically rigorous notion of differential privacy.
In differentially private algorithms there is usually a tension between
privacy and utility. In this regime, in most cases, having both optimality
and a high level of privacy at the same time is impossible. We propose a natural
mechanism that achieves a controllable trade-off, quantified by a parameter,
between the accuracy of the online algorithm and its privacy. Depending on the
parameter, our mechanism is either optimal with weaker differential privacy or
suboptimal yet more privacy-preserving. We conduct a detailed accuracy and
privacy analysis of our mechanism applied to the optimal algorithm for the
classical secretary problem. Thereby classical notions from two distinct
areas, optimal stopping and differential privacy, meet for the first time.

Comment: 22 pages, 6 figures
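For context, the optimal algorithm for the classical secretary problem, the baseline the paper's mechanism is applied to, can be sketched as follows: observe the first ⌊n/e⌋ candidates without stopping, then accept the first candidate better than all seen so far. This is only the illustrative non-private baseline, not the paper's privacy-preserving mechanism:

```python
import math
import random

# Classical 1/e stopping rule for the secretary problem; the success
# probability tends to 1/e ~ 0.368. Illustrative baseline only.

def secretary_stop(values):
    """Return the index at which the 1/e rule stops."""
    n = len(values)
    r = int(n / math.e)  # length of the observation phase
    best_seen = max(values[:r], default=float("-inf"))
    for i in range(r, n):
        if values[i] > best_seen:
            return i     # first candidate beating the observed sample
    return n - 1         # forced to take the last candidate

def success_rate(n=50, trials=20000, seed=1):
    """Empirical probability of picking the overall best candidate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n)]
        if vals[secretary_stop(vals)] == max(vals):
            wins += 1
    return wins / trials
```

A privacy problem is visible even in this sketch: the stopping index itself leaks information about the observed values, which is what motivates randomizing the rule.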
Dynamic sharing of a multiple access channel
In this paper we consider the mutual exclusion problem on a multiple access
channel. Mutual exclusion is one of the fundamental problems in distributed
computing. In the classic version of this problem, n processes execute a
concurrent program that occasionally triggers some of them to use a shared
resource, such as memory, a communication channel, or a device. The goal is to
design a distributed algorithm that controls entries to and exits from the shared
resource in such a way that at any time there is at most one process accessing
it. We consider both the classic version and a slightly weaker version of mutual
exclusion, called ε-mutual-exclusion, where for each period of a process
staying in the critical section the probability that some other
process is in the critical section is at most ε. We show that there are channel
settings where classic mutual exclusion is not feasible even for
randomized algorithms, while ε-mutual-exclusion is. In more relaxed channel
settings, we prove an exponential gap between the makespan complexity of the
classic mutual exclusion problem and its weaker ε-exclusion version. We also
show how to guarantee fairness of mutual exclusion algorithms, i.e., that each
process that wants to enter the critical section eventually succeeds.
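To illustrate the channel model, a toy randomized entry protocol on a simulated multiple access channel with collision detection might look as follows. Each contender broadcasts with probability 1/k; a round with exactly one broadcaster admits that process to the critical section. This sketch shows the model only and is not the algorithm analyzed in the paper:

```python
import random

# Toy simulation of contention on a multiple access channel with
# collision detection; illustrative model, not the paper's algorithm.

def elect_entrant(contenders, rng):
    """One channel round: each contender broadcasts with probability
    1/k; return the broadcaster if exactly one spoke, else None
    (silence or collision)."""
    p = 1.0 / len(contenders)
    speakers = [pid for pid in contenders if rng.random() < p]
    return speakers[0] if len(speakers) == 1 else None

def serve_all(n=8, max_rounds=1000, seed=7):
    """Rounds until every process has entered the critical section once."""
    rng = random.Random(seed)
    contenders = list(range(n))
    rounds = 0
    while contenders and rounds < max_rounds:
        rounds += 1
        winner = elect_entrant(contenders, rng)
        if winner is not None:
            contenders.remove(winner)  # winner entered and left the CS
    return rounds
```

With broadcast probability 1/k, each round succeeds with probability about 1/e, so all n processes are served in O(n) expected rounds in this toy model.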
Probabilistic Counters for Privacy Preserving Data Aggregation
Probabilistic counters are well-known tools often used for space-efficient
set cardinality estimation. In this paper we investigate probabilistic counters
from the perspective of preserving privacy, using the standard, rigorous notion
of differential privacy. The intuition is that probabilistic counters do not
reveal too much information about individuals, but provide only general
information about the population, so they can be used safely without violating
the privacy of individuals. It turns out, however, that providing a precise,
formal analysis of the privacy parameters of probabilistic counters is
surprisingly difficult and requires advanced techniques and a very careful
approach.
We also demonstrate that probabilistic counters can be used as a privacy
protection mechanism without any extra randomization. That is, the
randomization inherent in the protocol is sufficient for protecting privacy,
even if the probabilistic counter is used many times. In particular, we present
a specific privacy-preserving data aggregation protocol based on a
probabilistic counter. Our results can be used, for example, in performing
distributed surveys.
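As an illustration of the kind of counter studied, a Morris-style probabilistic counter can be sketched as follows. It stores only a small state c, increments it with probability 2^-c, and estimates the true count as 2^c - 1. The paper's exact counter variant and its privacy analysis are more involved; this code is a generic sketch:

```python
import random

# Morris-style probabilistic counter: stores O(log log n) bits of
# state and gives an unbiased estimate of the count. Generic sketch,
# not the paper's exact construction.

class MorrisCounter:
    def __init__(self, rng=None):
        self.c = 0
        self.rng = rng or random.Random()

    def increment(self):
        # Advance the state with probability 2^-c.
        if self.rng.random() < 2.0 ** (-self.c):
            self.c += 1

    def estimate(self):
        # E[2^c] = n + 1, so 2^c - 1 is an unbiased estimate of n.
        return 2 ** self.c - 1
```

The randomized increments are exactly the kind of inherent randomness the paper leverages: the counter's output already obscures any single individual's contribution, without adding separate noise.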